

Characterization of Overfitting in Robust Multiclass Classification

Neural Information Processing Systems

Nonetheless, modern machine learning is adaptive in nature: prior information about a model's performance on the test set inevitably influences…


DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification

Neural Information Processing Systems

Recent studies show that even advanced attacks cannot break such defenses effectively, since the purification process induces an extremely deep computational graph, which poses potential problems of vanishing/exploding gradients, high memory cost, and unbounded randomness.
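
As a rough illustration of the memory problem the excerpt mentions (a sketch under assumptions, not the DiffAttack method itself; the `denoiser` module and step count are hypothetical), the snippet below differentiates through a multi-step purification loop, using gradient checkpointing so the full graph need not be held in memory:

```python
import torch
from torch.utils.checkpoint import checkpoint

# Hypothetical denoiser standing in for one reverse-diffusion step.
denoiser = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)

def purify(x, steps=50):
    # Each step deepens the computational graph; checkpointing trades
    # recomputation for memory instead of storing every activation.
    for _ in range(steps):
        x = checkpoint(denoiser, x, use_reentrant=False)
    return x

x_adv = torch.rand(1, 3, 32, 32, requires_grad=True)
loss = purify(x_adv).sum()       # toy objective on the purified image
loss.backward()                  # gradient flows back through all 50 steps
print(x_adv.grad.abs().mean())   # inspect gradient magnitude through the deep graph
```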


Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

Neural Information Processing Systems

However, SSAT suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
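
A minimal sketch of how one might flag the AAEs the excerpt describes, assuming a standard FGSM inner step (the function name and epsilon are illustrative, not the paper's code): an example is abnormal when its loss after the inner maximization is lower than before.

```python
import torch
import torch.nn.functional as F

def flag_abnormal_examples(model, x, y, eps=8 / 255):
    """Return a boolean mask of abnormal adversarial examples (AAEs):
    samples whose loss *decreases* after the FGSM inner-maximization step."""
    x = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x), y, reduction="none")
    grad, = torch.autograd.grad(loss_clean.sum(), x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        loss_adv = F.cross_entropy(model(x_adv), y, reduction="none")
    return loss_adv < loss_clean.detach()  # True where maximization backfired
```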


Supplementary Material: Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View Zhiyu Lin

Neural Information Processing Systems

Fan et al. [2021] incorporate high-frequency views into contrastive learning… However, several works challenge the validity of this assumption. Yin et al. [2019] propose a robustness analysis strategy based on Fourier heatmaps, which uses a model's sensitivity to frequency bases. Maiya et al. [2021] argue that model robustness does not have an intrinsic connection… Beyond the frequency-component perspective, Chen et al. [2021] show that the CNN model should be consistent with the human visual system… To show the power-law distribution of natural images, we select CIFAR-10 [Krizhevsky et al., 2009], Tiny-ImageNet [Le and Yang, 2015], and ImageNet [Deng et al., 2009] for our experiments. We show an example of the division on ImageNet in Fig. 2, in which the high- and low-frequency components of the image obtained according to the division radius are also in line with our… We conduct experiments on naturally trained models, using the test sets of the CIFAR-10, Tiny-ImageNet, and ImageNet-1k datasets.
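
The frequency division by a radius that the excerpt refers to can be illustrated with a simple FFT mask (a generic sketch, not the authors' code): everything inside the radius in the centered spectrum is treated as low-frequency, everything outside as high-frequency.

```python
import numpy as np

def split_by_radius(img, radius):
    """Split a grayscale image into low- and high-frequency components
    using a circular mask of the given radius in the centered spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high

img = np.random.rand(32, 32)   # stand-in for one CIFAR-10 image channel
low, high = split_by_radius(img, radius=8)
assert np.allclose(low + high, img, atol=1e-6)  # the two components sum to the image
```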





A Broader Impacts

Neural Information Processing Systems

…MIM to enhance the adversarial robustness of downstream models. It is important to highlight that our paper focuses specifically on the adversarial robustness of ViTs. We show that our method provides an effective defense against severe adversarial attacks. We propose two hypotheses to explain our method's effectiveness: (1) … Figure 3(a) compares the results when the noise is known versus unknown to the attacker. When the attacker can access the noise, our model's robust accuracy does not improve much… The results indicate that both proposed hypotheses hold.



Adversarial Robustness through Random Weight Sampling

Neural Information Processing Systems

Deep neural networks have been found to be vulnerable across a variety of tasks: adversarial attacks can manipulate network outputs, resulting in incorrect predictions.
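
A hedged sketch of the general idea behind random weight sampling (the layer name, Gaussian parameterization, and initialization are illustrative assumptions, not the paper's exact construction): each forward pass draws the layer's weights from a learned distribution, so an attacker never faces the same deterministic network twice.

```python
import torch
import torch.nn as nn

class RandomWeightLinear(nn.Module):
    """Linear layer whose weights are resampled from N(mu, sigma^2)
    on every forward pass (illustrative sketch only)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Reparameterization trick: weights differ across calls, while
        # gradients still flow to mu and log_sigma during training.
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return nn.functional.linear(x, w, self.bias)

layer = RandomWeightLinear(784, 10)
x = torch.randn(4, 784)
print(layer(x).shape)  # repeated calls give different outputs for the same x
```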